Spline-FRIDA: Towards Diverse, Humanlike Robot Painting Styles with a Sample-Efficient, Differentiable Brush Stroke Model

Chen, Lawrence, Schaldenbrand, Peter, Shankar, Tanmay, Coleman, Lia, Oh, Jean

arXiv.org Artificial Intelligence

A painting is more than just a picture on a wall; it is a process composed of many intentional brush strokes, whose shapes are an important component of a painting's overall style and message. Prior work in modeling brush stroke trajectories either does not transfer to real-world robotics or is not flexible enough to capture the complexity of human-made brush strokes. In this work, we introduce Spline-FRIDA, which models complex human brush stroke trajectories. This is achieved by recording artists drawing using motion capture, modeling the extracted trajectories with an autoencoder, and introducing a novel brush stroke dynamics model to the existing robotic painting platform FRIDA. We conducted a survey and found that our open-source Spline-FRIDA approach successfully captures the stroke styles in human drawings, and that Spline-FRIDA's brush strokes are more human-like, improve semantic planning, and are more artistic compared to existing robot painting systems with restrictive Bézier curve strokes.
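The core idea of compressing recorded stroke trajectories into a low-dimensional latent code can be sketched in a few lines. The following is an illustrative stand-in, not the paper's model: a linear autoencoder (PCA via SVD) over synthetic "motion-capture" strokes; the data, latent dimension, and shapes are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (NOT the Spline-FRIDA model): compress 2-D brush
# stroke trajectories into a low-dimensional latent code with a linear
# autoencoder learned via SVD (i.e., PCA), then reconstruct them.
rng = np.random.default_rng(0)

# Fake "motion-capture" strokes: 50 strokes, each 20 (x, y) points, flattened.
t = np.linspace(0.0, 1.0, 20)
strokes = np.stack([
    np.concatenate([t + 0.05 * rng.standard_normal(20),
                    np.sin(2 * np.pi * t * rng.uniform(0.5, 1.5))])
    for _ in range(50)
])  # shape (50, 40)

mean = strokes.mean(axis=0)
_, _, vt = np.linalg.svd(strokes - mean, full_matrices=False)
basis = vt[:4]                           # 4-D latent space (assumed size)

encode = lambda x: (x - mean) @ basis.T  # stroke -> latent code
decode = lambda z: z @ basis + mean      # latent code -> stroke

latent = encode(strokes)
recon = decode(latent)
err = float(np.mean((recon - strokes) ** 2))
print(latent.shape, err)
```

A nonlinear autoencoder would replace `encode`/`decode` with learned networks, but the encode-compress-decode round trip is the same.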


Colour and Brush Stroke Pattern Recognition in Abstract Art using Modified Deep Convolutional Generative Adversarial Networks

Srinivasan, Srinitish, Pathak, Varenya

arXiv.org Artificial Intelligence

Abstract art is an immensely popular and widely discussed form of art that often depicts the emotions of an artist. Many researchers have attempted to study abstract art using edge detection, brush stroke, and emotion recognition algorithms based on machine and deep learning. This paper describes the study of a wide distribution of abstract paintings using Generative Adversarial Networks (GANs). GANs can learn and reproduce a distribution, enabling researchers and scientists to effectively explore and study the generated image space. However, the challenge lies in developing an efficient GAN architecture that overcomes common training pitfalls. This paper addresses this challenge by introducing a modified DCGAN (mDCGAN) specifically designed for high-quality artwork generation. The approach involves a thorough exploration of the modifications made, delving into the intricate workings of DCGANs and the optimisation and regularisation methods aimed at improving stability and realism in art generation, enabling effective study of the generated patterns. The proposed mDCGAN incorporates meticulous adjustments in layer configurations and architectural choices, offering tailored solutions to the unique demands of art generation while effectively combating issues like mode collapse and vanishing gradients. Further, this paper explores the learned latent space by performing random walks to understand the vector relationships between brush strokes and colours in the abstract art space, and presents a statistical analysis of unstable outputs after a certain period of GAN training. These findings validate the effectiveness of the proposed approach, emphasising its potential for the field of digital art generation and the digital art ecosystem.
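A latent-space random walk of the kind described above amounts to taking small steps in the generator's input space and observing how the outputs drift. A minimal sketch, using a stand-in random linear map in place of a trained mDCGAN generator (all dimensions and step sizes are assumptions):

```python
import numpy as np

# Illustrative latent-space random walk. The "generator" here is a fixed
# random linear map with a tanh, NOT a trained mDCGAN.
rng = np.random.default_rng(1)
latent_dim, out_dim, steps = 16, 64, 10

W = rng.standard_normal((latent_dim, out_dim)) / np.sqrt(latent_dim)
generate = lambda z: np.tanh(z @ W)      # stand-in for G(z)

z = rng.standard_normal(latent_dim)
walk = []
for _ in range(steps):
    z = z + 0.1 * rng.standard_normal(latent_dim)  # small latent step
    walk.append(generate(z))
walk = np.stack(walk)

# Consecutive outputs along the walk; small latent steps should yield a
# smooth traversal of the generated space.
deltas = np.linalg.norm(np.diff(walk, axis=0), axis=1)
print(walk.shape)
```

With a real generator, each `walk[i]` would be an image, and the sequence visualizes how brush-stroke and colour patterns morph as the latent vector moves.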


Inventing art styles with no artistic training data

Abrahamsen, Nilin, Yao, Jiahao

arXiv.org Artificial Intelligence

We propose two procedures to create painting styles using models trained only on natural images, providing objective proof that the model is not plagiarizing human art styles. In the first procedure we use the inductive bias from the artistic medium to achieve creative expression. Abstraction is achieved by using a reconstruction loss. The second procedure uses an additional natural image as inspiration to create a new style. These two procedures make it possible to invent new painting styles with no artistic training data. We believe that our approach can help pave the way for the ethical employment of generative AI in art, without infringing upon the originality of human creators.


Segmentation-Based Parametric Painting

de Guevara, Manuel Ladron, Fisher, Matthew, Hertzmann, Aaron

arXiv.org Artificial Intelligence

We introduce a novel image-to-painting method that facilitates the creation of large-scale, high-fidelity paintings with human-like quality and stylistic variation. To process large images and gain control over the painting process, we introduce a segmentation-based painting process and a dynamic attention map approach inspired by human painting strategies, allowing optimization of brush strokes to proceed in batches over different image regions, thereby capturing both large-scale structure and fine details, while also allowing stylistic control over detail. Our optimized batch processing and patch-based loss framework enable efficient handling of large canvases, ensuring our painted outputs are both aesthetically compelling and functionally superior as compared to previous methods, as confirmed by rigorous evaluations. Code available at: https://github.com/manuelladron/semantic_based_painting.git
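A patch-based loss of the kind mentioned above can be sketched simply: split canvas and target into patches and average a per-patch error, so stroke optimization can be driven region by region. This is a hypothetical helper for illustration, not the authors' code; the patch size and MSE choice are assumptions.

```python
import numpy as np

# Illustrative patch-based loss (hypothetical helper, NOT the paper's code):
# tile the canvas and target into non-overlapping patches and average the
# per-patch MSE, enabling region-by-region optimization.
def patch_loss(canvas, target, patch=8):
    h, w = canvas.shape
    losses = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            c = canvas[i:i + patch, j:j + patch]
            t = target[i:i + patch, j:j + patch]
            losses.append(np.mean((c - t) ** 2))
    return float(np.mean(losses))

rng = np.random.default_rng(2)
target = rng.random((32, 32))
print(patch_loss(np.zeros((32, 32)), target))
```

In a full system, a dynamic attention map would weight these patches unevenly so that detailed regions receive more strokes.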


Inside the Studio With an AI-Guided Painting Robot

TIME - Tech

To help illustrate our cover story on how the AI arms race is changing the world, we reached out to award-winning AI artist Pindar Van Arman, who uses artificial intelligence to create his art. Van Arman, who built his first "painting robot" 15 years ago, uses deep learning neural networks, artificial intelligence, feedback loops, and computational creativity to guide his newer robots. As a result, the robots end up making a surprising number of independent aesthetic decisions in the course of painting each piece, putting a different spin on the idea of "generative" AI: artificial intelligence that doesn't just compute, but also creates. "My machines have grown beyond being simple assistants and are now effectively augmenting my own creativity, as well as having creativity of their own," says Van Arman. "They have become a generative AI art system so sophisticated that it has forced me to consider the possibility that all art is generative."


FRIDA: A Collaborative Robot Painter with a Differentiable, Real2Sim2Real Planning Environment

Schaldenbrand, Peter, McCann, James, Oh, Jean

arXiv.org Artificial Intelligence

Painting is an artistic process of rendering visual content that achieves the high-level communication goals of an artist, which may change dynamically throughout the creative process. In this paper, we present a Framework and Robotics Initiative for Developing Arts (FRIDA) that enables humans to produce paintings on canvases by collaborating with a painter robot using simple inputs such as language descriptions or images. FRIDA introduces several technical innovations for computationally modeling a creative painting process. First, we develop a fully differentiable simulation environment for painting, adopting the idea of real-to-simulation-to-real (real2sim2real). We show that our proposed simulated painting environment has higher fidelity to reality than existing simulation environments used for robot painting. Second, to model the evolving dynamics of a creative process, we develop a planning approach that can continuously optimize the painting plan based on the evolving canvas with respect to the high-level goals. In contrast to existing approaches where the content generation process and action planning are performed independently and sequentially, FRIDA adapts to the stochastic nature of using paint and a brush by continually re-planning and re-assessing its semantic goals based on its visual perception of the painting progress. We describe the details of the technical approach as well as the system integration.
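The plan/act/perceive loop described above, where each new stroke is chosen against the current state of the canvas rather than a fixed upfront plan, can be sketched at toy scale. The following is a greedy stand-in, not FRIDA's planner; the canvas size, stroke count, and the imperfect 0.8 execution factor are all assumptions chosen to mimic the stochasticity of real paint.

```python
import numpy as np

# Illustrative re-planning loop (a stand-in, NOT FRIDA): after each
# simulated stroke, re-assess the canvas and greedily pick the next
# single-pixel "stroke" that most reduces error to the target.
rng = np.random.default_rng(3)
target = rng.random((8, 8))
canvas = np.zeros((8, 8))
initial_loss = float(np.mean((canvas - target) ** 2))

for _ in range(20):                      # paint 20 "strokes"
    err = target - canvas
    i, j = np.unravel_index(np.abs(err).argmax(), err.shape)
    canvas[i, j] += 0.8 * err[i, j]      # imperfect, paint-like execution
    # re-planning happens implicitly: the next stroke is chosen from the
    # updated canvas, not from a plan fixed before painting began

loss = float(np.mean((canvas - target) ** 2))
print(initial_loss, loss)
```

The real system replaces the greedy pixel pick with gradient-based optimization of full stroke parameters inside a differentiable simulator, but the observe-then-replan structure is the same.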


Xie

AAAI Conferences

Among various traditional art forms, brush stroke drawing is one of the styles widely used in modern computer graphics tools such as GIMP, Photoshop, and Painter. In this paper, we develop an AI-aided art authoring (A4) system for non-photorealistic rendering that allows users to automatically generate brush stroke paintings in a specific artist's style. Within the proposed reinforcement learning framework for brush stroke generation, our contribution is to learn artists' drawing styles from video-captured stroke data by inverse reinforcement learning. Through experiments, we demonstrate that our system can successfully learn artists' styles and render pictures with consistent and smooth brush strokes.


What Are The Ethical Boundaries Of Digital Life Forever?

#artificialintelligence

Today, artificial intelligence (AI)-driven digital technologies are giving us new pathways to always have your loved ones with you, 24/7. From the eeriness of Black Mirror episodes, to Carrie Fisher being digitally recreated to carry on as Princess Leia in Star Wars, to Microsoft securing a patent for software that could reincarnate people as chatbots, uses of AI that contemplate bringing the dead back to life are rapidly accelerating. Are we ready for digital resurrections? Is this the right thing for us to be doing? From my research, we don't yet have all the answers to this complex question, but many innovators, academics, and researchers are shaping answers that will enable richer immersive digital learning experiences. For some, bringing grandma back to life, and persisting forever, may feel positively therapeutic and ease a deep grief; for others, it may feel like being immersed in a Stephen King movie.


Can Machines Dream?

#artificialintelligence

Check out my GitHub for a working style transfer codebase. A question I'm sure you never thought of asking. Unless of course you and a friend have just watched iRobot at 4am. Nevertheless, here we are… in a world where machines are dreaming away… kind of. The good news is that you don't need to be worried.


Content Masked Loss: Human-Like Brush Stroke Planning in a Reinforcement Learning Painting Agent

Schaldenbrand, Peter, Oh, Jean

arXiv.org Artificial Intelligence

The objective of most reinforcement learning painting agents is to minimize the loss between a target image and the painted canvas. Human artistry, however, emphasizes important features of the target image rather than simply reproducing it (DiPaola 2007). RL painting models trained with adversarial or L2 losses generally produce polished final outputs, but their stroke sequences are vastly different from those a human would produce, since the model has no knowledge of the abstract features in the target image. To increase the human-likeness of the model's planning without the use of expensive human data, we introduce a new loss function for use with the model's reward function: Content Masked Loss. In the context of robot painting, Content Masked Loss employs an object detection model to extract features that are used to assign higher weight to regions of the canvas a human would find important for recognizing content. The results, based on 332 human evaluators, show that the digital paintings produced by our Content Masked model reveal detectable subject matter earlier in the stroke sequence than existing methods, without compromising the quality of the final painting.
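The weighting scheme described above can be sketched as an importance-masked L2 loss. Here the mask is a hand-made placeholder, not the paper's object-detection features; the canvas size and the 10x importance weight are illustrative assumptions.

```python
import numpy as np

# Illustrative content-masked L2 loss (hand-made mask, NOT the paper's
# object-detection features): weight per-pixel squared error by an
# importance mask so salient regions dominate the reward signal.
def content_masked_loss(canvas, target, mask):
    w = mask / mask.sum()                     # normalize importance weights
    return float(np.sum(w * (canvas - target) ** 2))

rng = np.random.default_rng(4)
target = rng.random((16, 16))
mask = np.ones((16, 16))
mask[4:12, 4:12] = 10.0                       # "important" central region

# Two canvases with one equal-magnitude error each: off-center vs. central.
off_center = target.copy(); off_center[0, 0] += 1.0
central = target.copy(); central[8, 8] += 1.0
print(content_masked_loss(off_center, target, mask),
      content_masked_loss(central, target, mask))
```

The same-sized error is penalized more when it falls in the high-importance region, which is exactly what pushes an agent to paint recognizable content first.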